Understanding Software Engineering: From Analogies With Other Disciplines

Author

  • Jason Baragry
Abstract

A Foundation for Software Engineering Understanding Software Engineering 252

Generalisation is used to devise concepts and theories that encompass a wider range of applicability than the original theories. For example, the role of mathematical modelling in social science deals with abstract idealisations of real-world entities and not with the direct concepts. (The views of Kuhn and Feyerabend are discussed in more detail in the next chapter, along with other philosophers of science such as Lakatos and Laudan.)

"We have learned that pure mathematics is neutral and, when applied, it is applied to our ideas about some matter of fact, and not the facts themselves. What gets mathematized is not a chunk of reality but some of our ideas about it." (Bunge 1973)

One of the most influential tools in the generalisation of concepts and theories to encompass a wider area of applicability is the use of analogy. For example, as mentioned in chapter 3, Ohm devised the basic law of current flow by studying the well-established theories of hydrodynamic systems (Jungnickel and McCormmach 1986). Ohm was able to show that both disciplines contained the concepts of substance flow through a conductive element, the concept of force to generate the flow, and the concept of resistance to the flow in that conductive element. He was then able to devise a mathematical relationship for current flow based on the established mathematical relationships that described hydrodynamic systems.

By developing a more abstract theory that describes two or more systems, if only in a sketchy way, it is possible to identify analogies and thereby generate theories to explain phenomena in new fields or generate encompassing theories across a range of fields (Bunge 1973). However, although an abstract theory can be used to explain the phenomena in two distinct situations, it is only because of an identified similarity.
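Ohm's analogical reasoning can be written out explicitly. The rendering below is ours, in modern notation rather than Ohm's own: the correspondence maps driving pressure to voltage, volume flow to current, and hydraulic resistance to electrical resistance.

```latex
% Hydrodynamic law                     Ohm's law
%   driving force:  pressure drop \Delta P   <->  voltage V
%   substance flow: volume flow Q            <->  current I
%   opposition:     hydraulic resistance R_h <->  electrical resistance R
\Delta P = R_h \, Q
\qquad\longleftrightarrow\qquad
V = R \, I
```

The structural identity of the two formulas, one concept mapped to one concept, is what licensed transferring the established hydrodynamic mathematics to the new electrical domain.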
It does not mean the two situations are identical or that other abstract theories in one of those areas can be used to explain phenomena in the other. Analogy is a very powerful tool in developing new theories. However, it is only able to suggest equivalence without being able to establish it. Therefore, the reckless use of analogy has also been misleading in scientific research. Bunge provides many good examples of this in a range of fields including quantum physics, information and entropy, and social evolution. Identity implies equality, equality implies equivalence, and equivalence implies similarity; however, the converse is not true. "Analogy is undoubtedly prolific, but it gives birth to as many monsters as healthy babies. In either case its products ... are just that: newborns that must be reared, if at all, rather than worshipped" (Bunge 1973). Bunge continues by quoting Gerard, "Analogical thinking is ... in our view not so much a source of answers on the nature of phenomena as a source of challenging questions" (Bunge 1973).

Many issues of epistemology and philosophy of science that relate to software engineering have been detailed here, though the discussion has certainly not covered all of them. Before detailing the effects they have on software engineering research, it is necessary also to detail relevant theories that psychologists have devised to explain the role of concepts and theories in our understanding of real-world processes.

5.4.2 The Psychology of Cognition

Ultimately, the development of a software system requires that we can conceptualise the real world and convert our understanding into an executable description. But the fact that we can understand and interact with the world at all is quite an astonishing feat in itself!
Furthermore, the fact that we can do so without a conscious understanding of how our mind performs this task belies the complexity of the processes involved. Psychologists have been researching the field of cognition for many decades and have devised theories that explain how we develop our conceptual apparatus and how that apparatus is used to allow us to function in the world. Steven Pinker begins his book, How the Mind Works, with the disclaimer, "we don't understand how the mind works – not nearly as well as we understand how the body works" (Pinker 1997). Although those complex operations have not been fully understood, psychologists have devised many experiments and illusions that provide glimpses into how the mind operates, and they have developed theories that explain both cognition and concept development and utilisation.

We must be able to conceptualise in order to function in the world the way that we do. The human brain gets its visual information about the world from the splashes of light that pass through the eyes and onto the retina. The light from the 3-dimensional world forms a 2-dimensional image on the retina which is somehow perceived as a collection of 3-dimensional shapes, objects, surfaces, etc., with different depth, texture, and colours. Without the cognitive apparatus required to reconstruct the apparent 3-dimensional structure of the world, everything that we see would just be a constant stream of visual psychedelia. The ability to do so seems to be largely innate in us. Research shows that the limited visual experience of three- to four-month-old infants is enough to perceive the visual milieu as a collection of cohesive, solid objects that follow natural laws of movement and contact interaction (Kellman 1996; Spelke and Hermer 1996).
From the ability to visualise the world as a collection of objects, the mind has evolved the ability to think about those objects as concepts or ideas – the ability to generate knowledge. Evolutionary theories of natural selection suggest that the ability to generate knowledge and reason about those objects has helped us to deal with and survive in the world (Pinker 1997). From the sensory experience obtained through interaction with real-world objects, concepts and categories are developed along with the ability to infer rules concerning their interaction. No two physical situations are exactly alike, and the ability to infer in terms of categories has ensured that we do not have to treat every situation as completely new. Consequently, our ability to survive in the world has improved. "An intelligent being ... has to put objects in categories so that it may apply its hard-won knowledge about similar objects, encountered in the past, to the object at hand." (Pinker 1997)

The fact that we have conceptual apparatus that enables us to perceive the world in a manner that allows us to develop knowledge about it is interesting for software development. More important, though, are the theories that explain how that apparatus works and how those concepts and categories are identified. They have significant ramifications for the assumptions that underlie software engineering research.

5.4.2.1 The Classical Theory of Categories

The classical theory of categories holds that something is a member of a particular category because it satisfies the set of necessary and sufficient features or attributes that constitute the category's defining properties, functions, and uses (McCauley 1987). As people interact with the world they become acquainted with the important properties of a particular category as they deal with the individual objects that instantiate them.
Perception, then, is a decision process in which the person interacts with objects and utilises certain attributes of an object to infer what sort of concept it is. Therefore, categories can be treated as specifications (Bruner 1958). Under this classical theory of categories, learning is a bottom-up process where people start with the simplest objects and categorise them in terms of their essential attributes. Objects that are more complex are then identified by combining the previously defined simple concepts. Moreover, the psychology of cognition can be understood in terms of set theory, where objects belong to a particular set based on their defining features and simpler or more complex objects can be understood in terms of set operations such as subsets, intersections, unions, etc. (McCauley 1987).

That classical theory became quite popular; however, attempts to use it to formally define particular concepts ran into anomalies. For instance, Pinker uses the example of the concept 'bachelor'.

"A bachelor, of course, is simply an adult male who has never been married. But now imagine that a friend asks you to invite some bachelors to her party. What would happen if you used the definition to decide which of the following people to invite?

Arthur has been living happily with Alice for the last five years. They have a two-year-old daughter and have never officially married.

Bruce was going to be drafted, so he arranged with his friend Barbara to have a justice of the peace marry them so he would be exempt. They have never lived together. He dates a number of women, and plans to have the marriage annulled as soon as he finds someone he wants to marry.

Charlie is 17 years old. He lives at home with his parents and is in high school.

David is 17 years old.
He left home at 13, started a small business, and is now a successful young entrepreneur leading a playboy's lifestyle in his penthouse apartment.

Eli and Edgar are homosexual lovers who have been living together for many years.

Faisal is allowed by the law of his native Abu Dhabi to have three wives. He currently has two and is interested in meeting another potential fiancée.

Father Gregory is the bishop of the Catholic cathedral at Groton upon Thames.

The list ... shows that the straightforward definition 'bachelor' does not capture our intuitions about who fits the category. Knowing who is a bachelor is just common sense, but there's nothing common about common sense. Somehow it just finds its way into human ... brains." ((Pinker 1997) p. 13)

As an extension to Pinker's challenge to use this simple definition to determine whom to invite, imagine writing the piece of software that defined all of the terms and then automatically chose bachelors based on the situations presented. The simple definition becomes a complex collection of rules and data specifications.

5.4.2.2 The Prototype Theory of Concept Identification

In the mid-1970s Eleanor Rosch and her colleagues conducted a number of experiments in cognitive development that the classical theory of categorisation was unable to explain; see for example (Rosch 1978). They found that people do not categorise objects in terms of defining attributes. In addition, they discovered that people's categories do not have clear-cut boundaries and that complex objects are not categorised in terms of features identified in simpler concepts and then abstracted to higher-level concepts. Rosch suggested people conceptualise objects as belonging to a particular category by developing prototypes – stereotypical examples that a person believes correctly exemplify their understanding of that category.
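Pinker's thought experiment about writing software for the 'bachelor' category can be made concrete. The sketch below is ours, not the thesis's: it encodes the classical necessary-and-sufficient definition, with our own simplified attribute encodings for Pinker's cases, and shows how its verdicts clash with intuition on the borderline cases. That gap is precisely what the prototype view tries to close.

```python
# Classical-category test: membership holds iff every defining attribute holds.
# The attribute encodings for Pinker's cases are our own simplifications.

def is_bachelor(age: int, male: bool, ever_married: bool) -> bool:
    """'An adult male who has never been married' as a strict definition."""
    return male and age >= 18 and not ever_married

# Arthur: never married, but in a long-term family relationship.
print(is_bachelor(age=35, male=True, ever_married=False))   # True, against intuition
# Bruce: technically married (a sham marriage), dates widely.
print(is_bachelor(age=25, male=True, ever_married=True))    # False, against intuition
# Father Gregory: a celibate bishop, never married.
print(is_bachelor(age=60, male=True, ever_married=False))   # True, against intuition
```

The definition returns an answer in every case, but the answers track the listed attributes, not the category people actually use.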
As new objects are perceived they are compared with the prototypes to determine which category they should belong to. Objects are not defined in terms of their essential attributes; they are categorised based upon some typicality rule which compares them to a previously identified exemplar. As people learn about a particular environment, the prototype objects are classified before the more marginal objects can be dealt with.

Like the classical theory, the prototype theory establishes that people's conceptual apparatus is constructed as a hierarchic collection of categories. However, unlike the classical theory, they found it is not generated in a bottom-up fashion. Rosch and her colleagues found that people initially identify what she terms basic-level categories – for example, the category 'chair'. As experience grows, both more specific and more generic categories are devised to classify objects. For the 'chair' example, a more specific, or subordinate, category would be 'high-chair' or 'stool', while a more generic, or superordinate, category would be 'furniture'. The basic-level categories tend to be the easiest to identify and correspond to the objects most often perceived. In addition, members of each basic-level category usually have a family resemblance and contain many of the same attributes. Furthermore, people tend to have similar ways of interacting with them. However, the superordinate-level categories may not possess similar attributes. More detailed descriptions of the classical theory and the results of Rosch's work can be found in (Rosch 1978; Keil 1987; McCauley 1987; Neisser 1987a; Neisser 1987b; Gelman 1996; Pinker 1997).

5.4.2.3 The Role of Theories in the Understanding of Concepts

Since the publication of Rosch's experimental results, psychologists have been attempting to devise theories of cognition that can successfully explain them.
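One puzzle those theories must address is visible even in a toy implementation of prototype matching. The sketch below is ours; the categories, feature encodings, and similarity measure are all invented for illustration. A bare typicality score assigns every object to some category, but it has no account of which features ought to matter, which is where theory-based accounts come in.

```python
# Prototype-based categorisation: assign an object to the category whose
# stereotypical exemplar it most resembles, instead of testing it against
# necessary-and-sufficient conditions.

PROTOTYPES = {
    "chair": {"legs": 4, "has_back": True, "sat_on": True, "soft": False},
    "stool": {"legs": 3, "has_back": False, "sat_on": True, "soft": False},
    "sofa":  {"legs": 4, "has_back": True, "sat_on": True, "soft": True},
}

def typicality(obj: dict, prototype: dict) -> int:
    """Crude typicality rule: count the attribute values the object shares."""
    return sum(obj.get(k) == v for k, v in prototype.items())

def categorise(obj: dict) -> str:
    """Pick the category of the most similar prototype."""
    return max(PROTOTYPES, key=lambda name: typicality(obj, PROTOTYPES[name]))

# A typical chair matches its prototype exactly.
print(categorise({"legs": 4, "has_back": True, "sat_on": True, "soft": False}))
# A beanbag-like object satisfies no definition, yet still lands in a category.
print(categorise({"legs": 0, "has_back": False, "sat_on": True, "soft": True}))
```

Note that the marginal object is forced into whichever category happens to score highest; nothing in the similarity count says whether 'soft' should outweigh 'has_back'.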
Current theories examine the role that intuitive theories about the world play in our means of conceiving the structure of that world. The classical theory of categories assumed that the world as we see it simply exists and that as we move through it we understand what is going on by abstracting concepts and their interactions from the sensory cues we experience. That is, we are somehow separate from the world, and the human mind simply perceives the important properties of particular objects and moves inferentially to concepts and their relationships. In contrast, recent theories suggest that people's relationship with the world is more interactive. As we act in and with the world, we understand it, partly, by imposing on it our expectations of what concepts and interactions we believe exist. Research has found that rather than being defined in terms of essential characteristics, people categorise objects in terms of the roles they play within intuitive theories about how the world operates. Whereas concepts were traditionally treated as isolated, atomic units, it is now recognised that they are interrelated and influenced by larger knowledge systems of theories. Pinker again uses the ever-popular chair example.

"An artifact is an object suitable for attaining some end that a person intends to be used for attaining that end. The mixture of mechanics and psychology makes artifacts a strange category. Artifacts can't be defined by their shape or their constitution, only by what they can do and by what someone, somewhere, wants them to do. Probably somewhere in the forests of the world there is a knot of branches that uncannily resembles a chair. But like the proverbial falling tree that makes no sound, it is not a chair until someone decides to treat it as one." (Pinker 1997)

There is evidence that suggests children begin to form a set of concepts and tacit theories in their first year of life (Keil 1987; Gelman 1996).
As the child encounters and interacts with events in the world, the objects involved tend to be grouped together and form an expectation about how objects will interact in future encounters. Rather than as an analysis of discrete objects in the world, categories are formed by analysing and somehow storing the structure of those events (Fivush 1987).

Researchers propose different theories to explain the exact nature of the interaction between concepts and theories (for example, see (Lakoff 1987; McCauley 1987; Meddin and Wattenmaker 1987; Neisser 1987b)). However, they all agree that categories are somehow understood in terms of theories rather than defining features. The term 'theory' is often used interchangeably with the term 'idealised cognitive model'. McCauley explains the differences as follows.

"... 'theory', in contrast to 'idealized cognitive model', connotes constructs that systematically characterize certain aspects of the world, but also a degree of formality, which probably does not apply to all the cognitive structures in question. ... In contrast, 'idealized cognitive model' includes less systematic constructs that may not adequately describe the more developed cognitive frameworks that structure large areas of human experience. ... Idealized cognitive models are simplified mental constructs that organize various domains of human experience, both practical and theoretical. Theories should, perhaps, be construed simply as the more elaborate and complex of our idealized cognitive models." (McCauley 1987)

The collection of a person's knowledge, then, is not simply a hierarchy of categories abstracted from sensory experience. It is the sum of all these cognitive models or theories. These are then used to plan behaviour and develop new knowledge by mentally playing out combinatorial interactions among them in the mind's eye (Pinker 1997).
Theories not only capture our understanding of concepts; it is those theories that allow people to conceptualise phenomena as they operate within the constant stream of sensory experience. The world can be conceived as an infinite variety of concepts and properties; people's innate theories of the world impose an order on this endless amount of detail to allow us to function in it. They form an idealised representation of reality that underemphasises or ignores a huge number of possible features by implicitly assuming their relative lack of importance. "They specify a set of cues in our environment that serve to define the situation and therefore establish expectations about probable changes in the environment and appropriate responses to them." (McCauley 1987)

For instance, McCauley uses Kant's explanation of the concept 'triangle' to highlight the fact that it would be impossible for people to develop their idealised concepts solely by abstracting from experienced instances.

"No amount of instances of, for example, a triangle ... could ever be adequate to the concept of a triangle in general. It would never attain the universality of the concept which renders it valid of all triangles, whether right-angled, obtuse-angled, or acute-angled; it would always be limited to a part only of this sphere. The schema of the triangle can exist nowhere but in thought ... Still less is an object of experience or its image ever adequate to the empirical concept ..." (McCauley 1987)

The consequence, as McCauley continues, is that "the world-in-itself is forever inaccessible" (McCauley 1987). The world we conceive has already been filtered by the conceptual apparatus that allows us to cope with that huge amount of detail.
The influence of our innate theories on our conceptualisation of reality is highlighted in experimental results obtained many years before these theories of cognitive development were devised. For example, Carmichael showed that concepts identified in language affect how people perceive different shapes (Carmichael, Hogan et al. 1932). Similarly, Wertheimer showed how people automatically attempt to group disparate visual information into clusters so that it can be understood (Wertheimer 1958).

5.4.2.4 Human Understanding and Conceptual Relativism

At the basic level, people identify similar collections of concepts because they are based on similarities in appearance and function (Rosch 1978). The superordinate and subordinate categories, however, are developed through cultural convention and are learned and passed on through language use (McCauley 1987; Neisser 1987a; Pinker 1997). Therefore, as people learn a language they also learn about a culture's concepts and theories of the world. All documented cultures have words for the elements of space, time, motion, speed, mental states, tools, flora, fauna, and weather, and for logical connectives (not, and, same, opposite, part-whole, and general-particular) (Pinker 1997). However, the meaning of words and the means of conceptualising the world are culturally dependent. This has been shown in various studies in sociology and anthropology; see for example (Levi-Strauss 1962; Levi-Strauss 1986; Knudtson and Suzuki 1992; Lee and Karmiloff-Smith 1996).

In addition to cultural dependence, the level of expertise in a domain can affect the conceptualisation of phenomena. There is evidence to suggest that as mastery of a domain occurs, the basic level of the conceiver changes.
As a domain is mastered, larger and more complex cognitive models are developed, and the new basic level becomes the next level in the hierarchy of categories that contains the greatest level of detail (McCauley 1987). The expert is able to categorise objects more efficiently than a novice based on these more sophisticated models of the domain.

There is more than one model that can explain a particular situation, and it is possible to entertain these models simultaneously. The ability to do so depends on imaginative capacity and different aims and purposes (Meddin and Wattenmaker 1987). Those different models or theories can have different levels of completeness, they may not be fully consistent, and they can provide different starting points from which further knowledge can be inferred. They also represent, with different levels of veridicality, the world we are trying to conceive. "Our various cognitive models offer alternative descriptions of the world. Everyone recognizes from time to time that certain descriptions are not only less helpful than others (given the problem at hand), but also that some are for all intents and purposes false." (McCauley 1987)

The basic-level concepts and theories identify some of what McCauley terms the "major joints of the world". However, as "we undertake steps of increasing sophistication ... we rely increasingly on the developed, abstract theories that we consciously entertain." (McCauley 1987) There is no guarantee that these abstract theories and concepts provide definitive reflections of the world. They can only be relied upon based on their perspicuity rather than proven representational accuracy.

The theories of cognition and human perception/conception are more complex than the classical theories suggest. However, researchers note that the classical theory of mind still pervades many theories of science, suggesting it is a carry-over from Aristotle's essentialism (Gelman 1996; Pinker 1997).
Nevertheless, it has been superseded by theories that define concepts and categories, not as collections of essential attributes, but as things that exist within an encompassing theory or model of observed phenomena. Pinker summarises with the following extract.

"Buckminster Fuller once wrote: 'Everything you've learned ... as obvious becomes less and less obvious as you begin to study the universe. For example, there are no solids in the universe. There's not even a suggestion of a solid. There are no absolute continuums. There are no surfaces. There are no straight lines.' In another sense, of course, the world does have surfaces and chairs and rabbits and minds. They are knots and patterns and vortices of matter and energy that obey their own laws and ripple through the sector of space-time in which we spend our days. They are not social constructions, not the bits of undigested beef that Scrooge blamed for his vision of Marley's ghost. But to a mind unequipped to find them, they might as well not exist at all. As the psychologist George Miller has put it, 'The crowning intellectual accomplishment of the brain is the real world ... [A]ll [the] fundamental aspects of the real world of our experience are adaptive interpretations of the really real world of physics.'" (Pinker 1997)

5.5 Understanding the Foundations of Software Engineering

The theories from philosophy and psychology identify issues that have tremendous significance for software engineering research. Although it is impossible to comprehensively capture all of the theories from these disciplines in such a small number of pages, it is clear the epistemological and metaphysical assumptions that underlie current thinking in software engineering research are, at best, too simplistic – at worst they are fundamentally wrong.
The epistemological assumption required to 'engineer' the conceptual construct in software development was stated at the beginning of this chapter as follows:

The identification of items that can be reused from previous applications, from the requirements analysis stage to the implementation stage, assumes that different clients and developers see the same reality and can model it using similar collections of distinct concepts and concept relationships. Moreover, those concepts and relationships can be specifically defined in terms of essential features and represented the same way in two different applications using the implementation medium of software development – hardware and software constructs.

Two aspects of the research presented here highlight the inherent difficulty in 'engineering' conceptual constructs: the nature of concepts, and the way the human mind uses conceptual models to understand reality. Both philosophical analysis and psychological experimentation have shown that the human conceptual apparatus does not specify universally applicable definitions of concepts in terms of essential characteristics or attributes. Nor does it identify concepts simply by abstracting them from the observed reality. However, the belief that concepts are defined in such a way is still prevalent outside the areas of philosophy and psychology. That includes the area of software engineering. Many philosophers and psychologists have argued this belief is a product of the still prevailing influence of the philosophers who set the foundations for western thinking – Socrates' quest for true knowledge through definition and Aristotle's attempts to define all knowledge through essential characteristics (see for example (Popper 1979a; Lakoff 1987; Bechtel 1988a; Gelman 1996; Pinker 1997)).
In software engineering, the influence of this implicit philosophical assumption is evident in the justifications used for particular design paradigms. The most recent of these is object-orientation. The relevant literature argues that object-orientation has benefits over previous development approaches because it allows the developer to directly implement the concepts identified in the problem domain. Those design methods claim that requirements are elicited by identifying the phenomena that needs to be automated. Analysis techniques are then used to represent that phenomena as a collection of concepts and relationships. Those concepts can then be specified as objects and classes by identifying their essential properties and functionality. Some researchers even refer to the previously discussed issues from philosophy and psychology as further justification for that belief. For instance, the references (Sowa 1983; Dillon and Tan 1993; Martin and Odell 1995; Bruegge and Dutoit 1999) all appeal to the concepts of intension and extension as justification for the assumption that concepts can be defined in terms of essential attributes. Consequently, it is argued, object-orientation allows software developers to successfully implement our models of reality. However, that belief is based on the classical theory of understanding concepts and not on a thorough analysis of the relevant, contemporary explanations provided by those disciplines.

The discussion that began this chapter showed that the most recent theories of software development state that requirements should be captured in use-cases. Those use-cases specify snippets of functionality and are extracted from the different perspectives of the many stakeholders in the development process.
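The philosophical assumption criticised here is easy to see in miniature. The class below is purely illustrative, our own invention rather than an example from any cited method: in the object-oriented idiom, a concept is frozen into a fixed set of 'essential' attributes and operations, which is exactly the classical-category move, regardless of which encompassing theory each stakeholder attaches to the term.

```python
# A concept rendered as a class: 'essential' attributes plus 'essential'
# behaviour, fixed by fiat. Whatever a stakeholder privately means by
# 'customer', the software admits only this definition.

class Customer:
    def __init__(self, name: str, balance: float) -> None:
        self.name = name          # declared an essential attribute
        self.balance = balance    # declared an essential attribute

    def place_order(self, amount: float) -> bool:
        """Declared essential behaviour: customers are things that order."""
        if 0 < amount <= self.balance:
            self.balance -= amount
            return True
        return False

c = Customer("Arthur", 10.0)
print(c.place_order(4.0))    # True; the balance drops to 6.0
print(c.place_order(100.0))  # False; the definition admits no other notion
```

The class answers every membership and behaviour question crisply, but only because the borderline cases and rival stakeholder theories have been legislated away in advance.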
The use-cases are not developed by abstracting concepts from the observed reality; they are a combination of the observed reality and the existing conceptual apparatus of the stakeholder that is applied to that reality in order to understand it. Indeed, the use-cases presented by the stakeholders are subsets of the larger, encompassing theories that are used by that particular stakeholder to understand the entire phenomena that needs to be automated in software. Those encompassing theories or cognitive models could be different for different stakeholders. The level of 'expertise' of the clients and customers of the problem domain may also be different to that of the analysts and software developers. Therefore, the concepts and theories developed to understand the same phenomena might be different. Although they may have the same label, the precise meanings of the concepts contained within those use-cases are dependent on their encompassing theories. Moreover, the use-cases are represented in natural language, which philosophers and psychologists have shown is already theory-laden. Finally, the different people who participate in the requirements elicitation process can utilise different collections of concepts and theories to understand the same phenomena. Therefore, there is no guarantee that the theories, and the smaller use-cases, used by the respective stakeholders to understand the phenomena are consistent with each other. Indeed, there is no guarantee that those explanatory theories used by the respective stakeholders are commensurable.

Software developers face the dilemma that they have to analyse the requirements presented by the different stakeholders; however, they can only analyse what those different stakeholders have described in the use-cases. They can only analyse what has been said or written in natural language, and not necessarily what the stakeholders exactly meant.
A number of issues become apparent:

• Different stakeholders can represent the same phenomena (a snippet of functionality that needs to be automated) using different use-cases.
• The same concept represented in different use-cases can have slightly different meanings in each context.
• Different concepts in different use-cases can refer to exactly the same phenomena.
• It may be exceedingly difficult to compare the precise meanings of different use-cases even though the concepts and relationships they describe appear to be similar.

These issues must be overcome during the analysis and design stages of the development process. Analysis seeks to amalgamate the use-cases into a single cohesive and consistent theory of the phenomena that needs to be automated in software. That theory is referred to as the analysis model or conceptual software architecture. The philosophical and psychological issues presented in this chapter provide insights into the fundamental nature of that analysis model, the factors that influence its creation and evolution, and the way the designer evaluates the effectiveness of the model as it is developed.

It is noted that some of the issues discussed here may be construed as design issues rather than analysis issues. Furthermore, the following discussion on the design phase may also contain issues that some researchers may categorise as implementation issues. The aim is not to provide a hard distinction between analysis and design or design and implementation; rather, it is to highlight the important issues, not how they should be categorised.

The analysis model must consist of a collection of concepts and relationships that, when implemented, realises the aspects of phenomena specified in the system requirements.
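The first two bulleted difficulties can be shown with a deliberately trivial sketch; every term below is invented for illustration. If an analyst amalgamates stakeholder glossaries by shared labels alone, distinct concepts are silently conflated and one stakeholder's meaning is lost.

```python
# Two stakeholders describe the same cruise-control phenomena. The label
# 'speed' names a subtly different concept in each encompassing theory.

driver_view = {
    "speed": "the value shown on the dashboard display",
    "set":   "pressing the cruise-control set button",
}
engineer_view = {
    "speed":      "rotation rate measured at the drive shaft",
    "engagement": "the controller entering its active state",
}

# Naive amalgamation by label: later entries silently overwrite earlier ones.
merged = {**driver_view, **engineer_view}

print(merged["speed"])  # only the engineer's meaning survives
print(sorted(merged))   # 'set' and 'engagement' persist as separate terms,
                        # although they may denote the very same event
```

A real analysis model obviously needs far more than a dictionary merge; the point is only that label identity is the wrong criterion for concept identity.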
In general, the understanding of the community outside the disciplines of philosophy and psychology assumes those concepts and relationships can be inferred from observed phenomena and that, necessarily, they must also be evident in the natural language used to capture those phenomena in the requirements. However, contemporary theories in philosophy and psychology contradict this form of positivism and have shown that reality does not consist of easily identifiable parts. Rather, there may be many different but equally valid sets of concepts identified to understand the same phenomena. The concepts and theories used to represent those phenomena are imposed by our intellects onto reality as a means of understanding the infinitely partitionable sensory experience. Moreover, the concepts and theories devised are not simply abstractions of the observed phenomena but are implicitly applied to sensory experience specifically to help understand the phenomena within the context of a particular problem-solving activity. Therefore, many different analysis models can successfully represent the elicited requirements. Furthermore, those models can consist of many different collections of concepts and relationships, and those concepts and relationships can exist at many different levels of generality. The analysis of the different object-oriented designs for the cruise control system detailed in chapter 2 of this thesis highlights the situation. The table comparing the identified ‘objects’ in those designs is reproduced here. Each of the seven object-oriented designs identifies a different collection of concepts used to develop a model that represents the problem to be solved. Moreover, that collection does not even consider the other software designs that utilised different design paradigms.
Each of the models represents an understanding of the cruise control problem; however, the precise meaning of the concepts used in each one is specific to the context of the model in which they appear.

Booch: Driver, Brake, Engine, Clock, Wheel, Current speed, Desired speed, Throttle, Accelerator. (9)
Yin & Tanik: Driver, Brake, Engine, Clock, Wheel, Cruise control system, Throttle, Accelerator. (8)
Birchenough: Driver, Wheels, Accelerator. (3)
Gomaa (JSD): Cruise control, Calibration, Drive shaft. (3)
Wasserman: Cruise controller, Engine monitor, Cruise monitor, Brake pedal monitor, Engine events, Cruise events, Brake events, Speed, Throttle actuator, Drive shaft sensor. (10)
Appelbe & Abowd: Driver, Brake, Engine, Clock, Wheel, Cruise controller, Throttle. (7)
Gomaa (Booch OO): Brake, Engine, Cruise control input, Cruise control, Desired speed, Throttle, Current speed, Distance, Calibration input, Calibration constant, Shaft, Shaft count. (12)

Table 5-1: Cruise Control ‘Objects’ (from Chapter 2)

The ability to develop models at different levels of generality is also highlighted in the other cruise control examples. In addition to the object-oriented designs, three researchers developed models using the notion of feedback control systems to identify the appropriate concepts and relationships. The concepts identified by Higgins and Shaw are represented in the following table (Table 5-2). Both developers initially created a model of the system that consisted of generic feedback control concepts. Those concepts were then replaced by concepts specific to the particular problem domain – cruise control systems.
Higgins (generic): Actuating entity, Reference input, Summing point, Control action, Controller, Control signal, Disturbing entity, Disturbance, Controlled system, Controlled output, Feedback elements, Primary feedback signal.
Higgins (specific): Driver, New desired speed, Set speed summing point, Desired speed, Set throttle pressure summing point, Throttle pressure, Throttle, Power, Environment, Speed gain/loss, Car, Speed, Speed sensor, Measured speed.
Higgins (complex): Car on summing point, Car on signal, Cruise control on summing point, Cruise control on signal, CC active summing point, CC active signal, Set speed summing point, Desired speed, Set throttle pressure summing point, Throttle pressure, Throttle, Power, Environment, Speed gain/loss, Car, Driver, Press brake, Press accelerator, Speed, Speed sensor, Measured speed, New desired speed, Brake/Accelerator sensor.
Shaw (generic): Set point, Controller, Input variable, Process, Change to manipulated variable, Controlled variable.
Shaw (specific): Activate/Inactivate switch, Controller, Desired speed, Throttle setting, Engine, Wheel rotation, Pulses from wheel.

Table 5-2: Generic and Specific Cruise Control Concepts

Both of those designs represent the same problem; however, they identify different concepts to those of the object-oriented designs. Higgins goes further and provides a more complex design based on a more sophisticated generic model of feedback systems. Again, this model is a valid representation of the problem – it is just more complex. The original, generic feedback designs also provide valid models of the cruise control problem. They represent the same reality as the specific models, just at a different level of abstraction. To use the terminology provided by Rosch, the object-oriented designs represent basic-level categories while the generic feedback models consist of superordinate-level categories.
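The two levels of generality shown in Table 5-2 can be sketched in code. This is a hypothetical illustration only – the class names and the simple proportional-control rule are assumptions, not taken from the Higgins or Shaw designs – but it shows how the superordinate-level concepts (set point, controlled variable, control action) can be specialised, without structural change, into the basic-level concepts of the cruise control domain (desired speed, measured speed, throttle adjustment):

```python
class FeedbackController:
    """Generic (superordinate-level) concepts: a set point, a controlled
    variable, and a control action that reduces the error between them."""

    def __init__(self, set_point: float, gain: float = 0.1):
        self.set_point = set_point
        self.gain = gain

    def control_action(self, controlled_variable: float) -> float:
        # Simple proportional control: act on the error signal.
        error = self.set_point - controlled_variable
        return self.gain * error


class CruiseController(FeedbackController):
    """The same concepts at the basic level: desired speed, measured
    speed, and a throttle adjustment."""

    def __init__(self, desired_speed: float):
        super().__init__(set_point=desired_speed)

    def throttle_adjustment(self, measured_speed: float) -> float:
        return self.control_action(measured_speed)


cc = CruiseController(desired_speed=100.0)
print(cc.throttle_adjustment(90.0))   # positive: open the throttle
print(cc.throttle_adjustment(110.0))  # negative: ease off
```

Nothing in the specialisation adds behaviour; it only renames the generic concepts into domain-specific ones, which is exactly the move from superordinate- to basic-level categories that the two tables record.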
While some people may conceive of the cruise control problem in terms of basic-level categories, it is equally valid that someone with expertise in the domain of feedback control will conceptualise the same reality in terms of the superordinate categories. In Chapter 2, the design reasoning of Jones, who was trained as an electronic engineer, illustrates that different way of understanding. The preceding philosophical and psychological foundations show that none of these different models is a better match with reality than the others. The only characteristic that can be used to differentiate between them is the ‘usefulness’ of each model for solving the exact requirements of the problem. The foundational issues identified also state that the collection of concepts and relationships that constitute the analysis models must be constrained by logical consistency and coherence. The field of human anatomy provides an interesting illustrative example because the human body is a large and incredibly complex system, and it is something we all possess an example of. The two most prominent ways of modelling the gross organisational structure of the anatomical system are in terms of regional topography and functional systems. Table 5-3 contrasts the structural arrangements of two popular texts on anatomy and physiology. The editorial board of Gray’s Anatomy detail the rationale for the choice of organisational structure: “Gray’s Anatomy was founded on the principle that to understand the body’s construction it is necessary to analyze it in terms of its component systems as well as its regional topography. ... Of course, this arrangement is to some extent an artificial separation of what in the body are intimately interdependent components, both during development and in the mature body.
It is obvious that whilst there are indeed many clinical conditions where dysfunction of a particular system occurs, there are many others in which topographical nearness of different systems is the prime consideration. ... Clearly what is needed is both a systematic account and a regional, topographical one. ... This would require much more than a single volume.” (Gray et al 1995)

Anatomy, regional and applied (Last 1978): Discusses the smallest ‘components’ larger than cells – skin, muscles, tendons, bones, joints, mucous membranes, serous membranes, blood vessels, lymphatics – as well as the nervous system. It then partitions the body into: Upper Limb, Lower Limb, Thorax, Abdomen, Head & Neck, and Central Nervous System.

Gray’s Anatomy (Gray et al 1995): Partitions the body into the following major components: Cells & Tissues, Integumental System, Skeletal System, Muscle, Nervous System, Haemolymphoid System, Cardiovascular System, Respiratory System, Alimentary System, Urinary System, Reproductive System, Endocrine System, and Surface Anatomy.

Table 5-3: Contrasting Anatomical Models

If the human anatomy were to be implemented as a software system, both of these structural arrangements would result in different analysis models, or conceptual architectures, in which the concepts constitute a coherent and consistent system. As Gray states, the most appropriate conceptual model would depend on its intended function. In contrast, a conceptual model that consisted of lower limbs, thorax, respiratory system, skeletal system, and alimentary system would not be logically consistent. The different large-scale concepts overlap in function due to varying levels of generality. Moreover, because of our intimate knowledge of what bodies do, it can immediately be seen that many functions would be impossible to implement.
31 Skin and its derivatives: hair, nails, glands, etc.
32 Blood and its derivatives: red blood cells, bone marrow, hemoglobin, etc.
33 Food consumption and processing.

For instance, which concept contains the implementation of the femur (thigh bone) – the ‘lower limb’ or the ‘skeletal system’? How could this body ‘see’ anything without any concept implementing a pair of eyes? The consistency of the conceptual model of this fictitious body is obviously flawed; our detailed knowledge of the body’s functionality and small-scale componentry makes it obvious. However, how is it possible to detect analogous logical inconsistencies in the conceptual models of systems in domains in which we do not possess such intimate knowledge? The only method of assurance is a constant process of validation of the model as its design proceeds. As Popper noted, the process consists of a continuous application of conjectures and refutations until a model is developed that cannot be falsified. The knowledge required to identify inconsistencies in a model is dependent on the purpose of the model. However, that knowledge is not always immediately obvious. To repeat the quote used earlier, “To understand a problem means to understand its difficulties; and to understand its difficulties means to understand why it is not easily soluble – why the more obvious solutions do not work. We produce the obvious solutions and then criticize them, in order to find out why they do not work. In this way, we become acquainted with the problem, and may proceed from bad solutions to better ones – provided always that we have the creative ability to produce new guesses, and more new guesses. ...
If we have been working on a problem long enough, and intensively enough, we begin to know it, to understand it, in the sense that we know what kind of guess or conjecture or hypothesis will not do at all, because it simply misses the point of the problem, and what kind of requirements would have to be met by any serious attempt to solve it. We begin to see the ramifications of the problem, its subproblems, and its connections with other problems.” (Popper 1979d)

This issue highlights immediate questions for software engineering research. For instance, the analysis model of the system can also be referred to as the system’s logical or conceptual architecture. Research suggests that model, or architecture, should be created relatively early in the development process, and that it then sets the path for subsequent steps in that process. However, the philosophy of science suggests that while developers may possess the knowledge required to validate that model early in the development process, they may not possess the knowledge required to successfully falsify it. Moreover, it may not be until the development process is well into the design, or even the implementation, stage that that knowledge is generated by the developers. This would appear to lend some credence to software engineering researchers who claim the architecture of a software system cannot always be determined in the earliest stages of the design process as theory suggests, and that there are cases where it need not be (Reed 1987).

34 Regulation of internal functions.

A second issue that makes it difficult for developers to refute the proposed analysis model concerns whose knowledge has been used to develop the analysis model.
Conceptual relativism suggests that while there is some common-sense realism, in that we all experience the same reality as sensory inputs, the conceptual arrangements we use to understand that sense data depend on our previous experiences and level of expertise. The precise meanings of the concepts and theories developed to understand and solve a particular problem are subjective to the person understanding it. However, the analysis model is derived from the requirements use-cases, and they are generated by a number of different stakeholders in the development process. In many situations, the clients and users of the system, who help to generate the use-cases, have a greater level of expertise or knowledge in the particular problem domain than the system analysts and developers. Therefore, the precise meanings of the concepts and relationships specified by the clients in the development of use-cases would be different to those of the developers. The use-cases represent snippets of the client’s model of how the problem domain operates, while the analysis model is a product of the developer’s understanding of the client’s model. As Popper states, it may not be until the developers have worked on the problem for a very long time, perhaps until the system implementation, that they fully ‘understand’ what the clients had intended. The fact that use-cases are represented in natural language serves to exacerbate the problem. Following from the previous arguments of Swartz about intension and extension, the majority of software development is performed in problem domains that consist of concepts that cannot be precisely defined. Therefore, it would be easy for clients and software developers to have different understandings of the same labelled concept because of the spectrum of its ambiguity. The process of design aims to transform the conceptual model developed during the analysis stage into a collection of concepts and relationships that are implementable in software.
The theories presented from philosophy and psychology uncover foundational issues concerning the differences between the concepts used in analysis and those used in design. They also uncover issues concerning the influence of design criteria on the preceding analysis process, and how the evolving design model can be evaluated. To implement the concepts and relationships present in the analysis model, the designers can only utilise the constructs provided by the implementation medium of software development. That implementation medium consists of programming language constructs, operating system services, hardware execution constructs, external ‘off-the-shelf’ software components, and the virtual machine that executes the resulting software/hardware implementation to realise the solution to the problem. Different programming languages offer different varieties of implementation constructs. Northrop (Northrop and Richardson 1991) has classified them into the following design categories: function-orientated design, data flow-orientated design, data structure-orientated design, object-based design, and object-orientated design. Object-oriented languages are claimed to have advantages over other programming languages because they allow the implementation of the ‘concepts we perceive’ by encapsulating data and functionality into a single implementation construct. Object-oriented design proceeds by identifying and specifying the properties and functionality of the concepts identified in the analysis model. However, the ‘refinement’ of previously identified concepts occurs within the influence of other constraints placed on the developers. Those constraints include:

• The partitioning of the solution into major components and their means of communication – the system architecture.
• The stipulation of interfaces for those components to specify the exact nature of component interaction.
• The specification of control flow to stipulate how the system will be executed by the machine to realise the solution.
• The consideration of non-functional requirements such as system performance, maintainability, and modifiability.
• The desire to utilise previously existing software components.

Jacobson et al (Jacobson, Booch et al. 1998) provide a comparison of the differences between the analysis and design models:

Analysis model: a conceptual model, because it is an abstraction of the system and avoids implementation issues. Design model: a physical model, because it is a blueprint of the implementation.
Analysis model: design-generic (applicable to several designs). Design model: not generic, but specific for an implementation.
Analysis model: three (conceptual) stereotypes on classes – «boundary», «entity», and «control». Design model: any number of physical stereotypes on classes, depending on the implementation language.
Analysis model: less formal. Design model: more formal.
Analysis model: less expensive to develop (1:5 ratio to design). Design model: more expensive to develop (5:1 ratio to analysis).
Analysis model: few layers. Design model: many layers.
Analysis model: dynamic, but with little focus on sequence. Design model: dynamic, with much focus on sequence.
Analysis model: outlines the design of the system, including its architecture. Design model: manifests the design of the system, including its architecture (one of its views).
Analysis model: primarily created by “leg work”, in workshops and the like. Design model: primarily created by “visual programming” in round-trip engineering environments; the design model is “round-trip engineered” with the implementation model.
Analysis model: may not be maintained throughout the complete software lifecycle. Design model: should be maintained throughout the complete software lifecycle.
Analysis model: defines a structure that is an essential input to shaping the system, including creating the design model. Design model: shapes the system while trying to preserve the structure defined by the analysis model as much as possible.

Table 5-4: Comparison of the Analysis Model and the Design Model (from (Jacobson, Booch et al. 1998) p. 219)

While there are specific differences between the two, it is assumed that the design model is based, in part, on a refinement of what exists in the conceptual model. However, the issues of philosophy and psychology show there are more significant differences than those presented in the conventional object-oriented design literature.

Analysis model: concepts and relationships cannot be precisely defined by intension. Design model: concepts and relationships must be defined by essential features and specific functionality.
Analysis model: the precise meaning of concepts and relationships is dependent on the context of the theory in which they are contained. Design model: the precise meanings of concepts and relationships – their definitions – are independent of the system in which they are implemented.
Analysis model: concepts and relationships are constrained only by the previous experience and imaginative ability of the stakeholders in the development process. Design model: concepts and relationships are constrained by the constructs provided by the implementation medium and the execution model of the virtual machine that executes it.

Table 5-5: Comparison of the Analysis Model and Design Model based on the Philosophical and Psychological Foundations

One of the claimed advantages of object-oriented development is that developers can use objects in a uniform modelling approach throughout the development process (Kaindl 1999). That belief is based on the classical theory of conceptual understanding, which states that all concepts can be specified in terms of essential features or intensional definitions. As philosophers and psychologists have noted, the classical theory of categorisation has proved too simplistic to explain the human thought process and has now been superseded by more sophisticated theories.
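The contrast between the two kinds of concept can be sketched as follows. The names, the prototype features, and the similarity threshold are all invented for illustration: the design-level class is pinned down by an exact, typed, intensional definition built ultimately from machine primitives, whereas the analysis-level notion is better approximated by graded resemblance to a prototype, in the spirit of Rosch:

```python
from dataclasses import dataclass

# Design-level concept: a fixed intensional definition. Membership is
# determined by an exact set of essential features with precise types.
@dataclass
class Account:  # hypothetical design class
    account_id: int      # essential feature, precisely typed
    balance_cents: int   # built, ultimately, from machine integers

# Analysis-level concept: no axiomatic definition exists, so a rough
# model of categorisation by resemblance to a prototype is a closer
# fit than any list of essential features.
PROTOTYPE_ACCOUNT_FEATURES = {"identifier", "balance", "owner", "history"}

def resembles_account(features: set, threshold: float = 0.5) -> bool:
    # Graded membership: how much does this notion overlap the prototype?
    overlap = len(features & PROTOTYPE_ACCOUNT_FEATURES)
    return overlap / len(PROTOTYPE_ACCOUNT_FEATURES) >= threshold

print(resembles_account({"identifier", "balance", "owner"}))  # True
print(resembles_account({"colour", "weight"}))                # False
```

The design class admits no borderline cases: a value either satisfies its typed definition or it does not. The analysis-level predicate, by contrast, is graded and depends on an arbitrary threshold, mirroring the open texture of concepts that cannot be defined by intension.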
The foundations show that the components of the analysis and design models, though the same label may be used to refer to them, represent inherently different things. This explains why the transition from object-oriented analysis to design is not as easy as suggested by object-oriented design methods. Those methods suggest the transition is smooth and easy; in practice it has been shown to be quite difficult (Kaindl 1999). This also explains why researchers are beginning to question the assumption that object-orientated development is advantageous because it allows developers to more easily implement their model of reality (see, for example, (Hatton 1998)). As stated previously, the problem with the classical theory of definition is that concepts are defined in terms of attributes, which themselves have to be defined. The result is an infinite regress of definitions. That problem is not faced by design model concepts in software development because they have been built by aggregating, encapsulating, and abstracting the constructs provided by the implementation medium. Analysis model concepts cannot be defined because there exists no axiomatic level of definition in our conceptual apparatus. In contrast, the concepts used in the design and implementation models of software are built on top of the axiomatic definitions of the Von Neumann computer architecture. Progress in software development has produced abstractions that allow developers to design and implement above that axiomatic level. Moreover, the progression from machine code to assembly-level languages, and then to data flow-orientated design, data structure-orientated design, function-orientated design, object-based design, and object-orientated design, has made it appear as though developers can now analyse, design, and implement systems using notations that closely match our models of reality.
That justification for progress in software design methods and languages has been based on the classical theory of concept definition. Research in the fields of philosophy and psychology has shown that view is too simplistic. Dreyfus has noted this same phenomenon in his critique of artificial intelligence research (Dreyfus 1992). Artificial intelligence attempts to formalise intelligent activity by transforming it into a set of computer instructions. He shows this is based on the ontological assumption that explicit facts exist in the world and that they can be formalised in the context-free environment of computer software. That assumption is similar to the one implicitly made by software engineers and, as has been suggested by philosophers and psychologists, is also made by most communities, both scientific and non-scientific, who seek to understand human thought processes. Dreyfus quotes Chomsky (from Language and Mind) to note the predisposition of researchers to use simplistic examples to justify the belief in the classical theory of understanding:

“There has been a natural but unfortunate tendency to ‘extrapolate’, from a thimbleful of knowledge that has been obtained in careful experimental work and rigorous data-processing, to issues of much wider significance and of greater social concern.” ((Dreyfus 1992) p. 79)

35 Other software engineering researchers that exemplify these foundational issues are presented in the next chapter.

The previous discussion of the analysis model considered the effects of the philosophical and psychological foundations on how that model is evaluated during development. Those effects have even more ramifications during the design and implementation stages. If the developers do not develop the knowledge required to falsify the model until the design or implementation phases, and a falsifying example then appears, does the model need to be replaced?
It may be that the originally conceived collection of concepts and relationships appeared adequate to satisfy the required properties in the implementation. However, a new situation may present itself during the implementation stage that was not previously considered. Similarly, the requirements may be modified to cover a new situation that was not previously required. Does the model need to be substantially modified, or can the required properties be implemented within the previously existing concepts? Can the conceptual integrity of the model be ‘fudged’ to ensure the designed model continues to satisfy the requirements? These are all questions for future research.


Publication date: 2002